Convergence properties of mixed-norm algorithms under general error criteria

Authors

  • Tareq Y. Al-Naffouri
  • Azzedine Zerguine
  • Maamar Bettayeb
Abstract

The convergence properties of mixed-norm algorithms, as applied to echo cancelers under general error criteria, are derived for correlated and identically distributed inputs. The convergence analysis of this class of algorithms is carried out by linearizing the error nonlinearities. Necessary and sufficient conditions for convergence are derived for the independent-input case. In the update equations, f(e(k)) and g(e(k)) are the error nonlinearities, W_N and W_F are the true impulse responses of the near-end and far-end sections, respectively, and mu_N and mu_F are the step sizes of the near-end and far-end sections, respectively. The error is defined by

e(k) = n(k) + v_N^T(k) x_N(k) + v_F^T(k) x_F(k),   (3)

where n(k) is the additive noise and v_N(k), v_F(k) are the weight-error vectors of the two sections.
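The two-section structure described in the abstract can be sketched as a short simulation. This is a minimal illustration, not the paper's exact algorithm: the filter lengths, step sizes, mixing parameter `delta`, white inputs, and the particular mixed-norm nonlinearity f(e) = delta*e + (1-delta)*e^3 are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (dimensions, responses, and step sizes are illustrative).
N, F = 8, 8                               # taps in near-end / far-end sections
w_N_true = 0.5 * rng.standard_normal(N)   # "true" near-end impulse response
w_F_true = 0.5 * rng.standard_normal(F)   # "true" far-end impulse response

w_N = np.zeros(N)                         # adaptive estimates
w_F = np.zeros(F)
mu_N, mu_F = 0.02, 0.02                   # step sizes of the two sections

# Mixed-norm error nonlinearity: delta weights the LMS (e) vs. LMF (e^3) terms.
delta = 0.7
f = lambda e: delta * e + (1.0 - delta) * e**3
g = f                                     # same nonlinearity for both sections here

for k in range(5000):
    x_N = rng.standard_normal(N)          # near-end input vector x_N(k)
    x_F = rng.standard_normal(F)          # far-end input vector x_F(k)
    n = 1e-3 * rng.standard_normal()      # additive noise n(k)

    d = w_N_true @ x_N + w_F_true @ x_F + n   # desired response
    e = d - (w_N @ x_N + w_F @ x_F)           # error e(k)

    # Stochastic-gradient updates driven by the error nonlinearities.
    w_N += mu_N * f(e) * x_N
    w_F += mu_F * g(e) * x_F
```

With step sizes this small and white inputs, both sections adapt toward the true responses; the residual weight-error norm settles near the noise floor.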

Related articles

An Analytical Model for Predicting the Convergence Behavior of the Least Mean Mixed-Norm (LMMN) Algorithm

The Least Mean Mixed-Norm (LMMN) algorithm is a stochastic gradient-based algorithm whose objective is to minimize a combination of the cost functions of the Least Mean Square (LMS) and Least Mean Fourth (LMF) algorithms. This algorithm has inherited many properties and advantages of the LMS and LMF algorithms and mitigates their weaknesses in some ways. The main issue of the LMMN algorithm is t...
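The LMS-LMF combination described above is commonly written as follows. This is the standard textbook formulation, not necessarily the exact normalization analyzed in the cited paper; delta in [0, 1] weights the two cost functions:

```latex
J_{\text{LMMN}} = \frac{\delta}{2}\,E\!\left[e^{2}(k)\right]
               + \frac{1-\delta}{4}\,E\!\left[e^{4}(k)\right],
\qquad 0 \le \delta \le 1,
\\[4pt]
w(k+1) = w(k) + \mu\, e(k)\left[\delta + (1-\delta)\,e^{2}(k)\right] x(k).
```

Setting delta = 1 recovers the LMS update, and delta = 0 recovers the LMF update.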


Parameter determination in a parabolic inverse problem in general dimensions

It is well known that parabolic partial differential equations in two or more space dimensions with overspecified boundary data feature in the mathematical modeling of many phenomena. In this article, an inverse problem of determining an unknown time-dependent source term of a parabolic equation in general dimensions is considered. Employing some transformations, we change the inverse prob...


On the modified iterative methods for $M$-matrix linear systems

This paper deals with scrutinizing the convergence properties of iterative methods for solving linear systems of equations. Recently, several types of preconditioners have been applied to improve the rate of convergence of the Accelerated Overrelaxation (AOR) method. In this paper, we study the applicability of a general class of preconditioned iterative methods under certain conditio...


On the optimality of neural-network approximation using incremental algorithms

The problem of approximating functions by neural networks using incremental algorithms is studied. For functions belonging to a rather general class, characterized by certain smoothness properties with respect to the L2 norm, we compute upper bounds on the approximation error, where error is measured by the Lq norm, 1 ≤ q ≤ ∞. These results extend previous work, applicable in the ca...


Error analysis for online gradient descent algorithms in reproducing kernel Hilbert spaces

We consider online gradient descent algorithms with general convex loss functions in reproducing kernel Hilbert spaces (RKHS). These algorithms offer an advantageous way for learning from large training sets. We provide general conditions ensuring convergence of the algorithm in the RKHS norm. Explicit generalization error rates for q-norm ε-insensitive regression loss are given by choosing the...
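The online scheme sketched in this blurb can be illustrated with a minimal kernel-based gradient descent. Everything below is an illustrative assumption: the Gaussian kernel and its bandwidth, the step size and regularization constants, the sin(pi*x) target, and the use of plain squared loss (one admissible convex loss; the paper's explicit rates concern q-norm epsilon-insensitive loss).

```python
import numpy as np

rng = np.random.default_rng(1)

def K(x, y, sigma=0.3):
    """Gaussian kernel (bandwidth sigma is an illustrative choice)."""
    return np.exp(-((x - y) ** 2) / (2 * sigma**2))

eta, lam = 0.3, 0.001     # step size and regularization (illustrative)
centers, alphas = [], []  # f_t(.) = sum_i alphas[i] * K(centers[i], .)

def predict(x):
    return sum(a * K(c, x) for c, a in zip(centers, alphas))

# Online stream: learn y = sin(pi * x) from noisy samples, one at a time.
for t in range(500):
    x = rng.uniform(-1.0, 1.0)
    y = np.sin(np.pi * x) + 0.01 * rng.standard_normal()

    g = predict(x) - y  # derivative of the squared loss at the current sample
    # Gradient step in the RKHS: shrink old coefficients (regularization term),
    # then append a new kernel center at the current sample.
    alphas = [(1 - eta * lam) * a for a in alphas]
    centers.append(x)
    alphas.append(-eta * g)

# Average absolute error of the learned function on a grid.
grid = np.linspace(-1, 1, 21)
mean_err = np.mean([abs(predict(x) - np.sin(np.pi * x)) for x in grid])
```

Each update touches only one new kernel center, which is what makes this style of algorithm attractive for large training sets: the per-step cost is independent of the total sample size (apart from the growing expansion, which in practice is truncated).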



Journal:

Volume   Issue

Pages  -

Publication year: 1999